
    Using eigenvoices and nearest-neighbours in HMM-based cross-lingual speaker adaptation with limited data

    Cross-lingual speaker adaptation for speech synthesis has many applications, such as use in speech-to-speech translation systems. Here, we focus on cross-lingual adaptation for statistical speech synthesis systems using limited adaptation data. To that end, we propose two eigenvoice adaptation approaches exploiting a bilingual Turkish-English speech database that we collected. In one approach, eigenvoice weights extracted using Turkish adaptation data and Turkish voice models are transformed into eigenvoice weights for the English voice models using linear regression. Weighting the samples by the distance of the reference speakers to the target speaker during linear regression was found to improve performance; importance-weighting the elements of the eigenvectors during regression improved it further. The second approach proposed here is speaker-specific state mapping, which performed significantly better than the baseline state-mapping algorithm in both objective and subjective tests. Performance of the proposed state-mapping algorithm was further improved when it was used with the intralingual eigenvoice approach instead of the linear-regression-based algorithms used in the baseline system. Funding: European Commission; TÜBİTAK.
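    Below is a minimal sketch of the distance-weighted linear regression step described above, assuming the eigenvoice weights have already been extracted for the reference speakers in both languages; the Gaussian distance weighting, the bias column, and the array shapes are illustrative choices, not the paper's exact recipe.

```python
# Minimal sketch: map Turkish eigenvoice weights to English ones via
# distance-weighted linear regression. Shapes and the Gaussian weighting
# are illustrative assumptions, not the paper's exact formulation.
import numpy as np

def map_eigenvoice_weights(X_tr, Y_en, w_target, sigma=1.0):
    """X_tr:     (n_ref, k) Turkish eigenvoice weights of reference speakers
    Y_en:     (n_ref, k) English eigenvoice weights of the same speakers
    w_target: (k,)       Turkish eigenvoice weights of the target speaker
    """
    # Weight each reference speaker by proximity to the target in the
    # Turkish eigenvoice space (closer speakers influence the fit more).
    d = np.linalg.norm(X_tr - w_target, axis=1)
    s = np.exp(-(d ** 2) / (2.0 * sigma ** 2))          # sample weights

    # Weighted least squares via the sqrt-weight trick: scale rows, then
    # solve the ordinary least-squares problem  X' W ≈ Y'.
    sw = np.sqrt(s)[:, None]
    X1 = np.hstack([X_tr, np.ones((len(X_tr), 1))])     # add bias column
    W, *_ = np.linalg.lstsq(X1 * sw, Y_en * sw, rcond=None)

    # Apply the learned mapping to the target speaker's weights.
    return np.hstack([w_target, 1.0]) @ W
```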

    ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications

    Personal assistants, automatic speech recognizers and dialogue understanding systems are becoming more critical in our interconnected digital world. A clear example is air traffic control (ATC) communications. ATC aims at guiding aircraft and controlling the airspace in a safe and optimal manner. These voice-based dialogues are carried out between an air traffic controller (ATCO) and pilots via very-high-frequency radio channels. In order to incorporate these novel technologies into ATC, a low-resource domain, large-scale annotated datasets are required to develop data-driven AI systems such as automatic speech recognition (ASR) and natural language understanding (NLU). In this paper, we introduce the ATCO2 corpus, a dataset that aims at fostering research in the challenging ATC field, which has lagged behind due to a lack of annotated data. The ATCO2 corpus covers 1) data collection and pre-processing, 2) pseudo-annotation of speech data, and 3) extraction of ATC-related named entities. The ATCO2 corpus is split into three subsets. 1) The ATCO2-test-set corpus contains 4 hours of ATC speech with manual transcripts and a subset with gold annotations for named-entity recognition (callsign, command, value). 2) The ATCO2-PL-set corpus consists of 5281 hours of unlabeled ATC data enriched with automatic transcripts from an in-domain speech recognizer, contextual information, speaker turn information, a signal-to-noise ratio estimate and an English language detection score per sample. Both are available for purchase through ELDA at http://catalog.elra.info/en-us/repository/browse/ELRA-S0484. 3) The ATCO2-test-set-1h corpus is a one-hour subset of the original test set that we offer for free at https://www.atco2.org/data. We expect the ATCO2 corpus will foster research on robust ASR and NLU, not only in the field of ATC communications but also in the general research community. Comment: manuscript under review; the code will be available at https://github.com/idiap/atco2-corpus.
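    As a practical illustration of working with the per-sample metadata the ATCO2-PL-set is described as carrying (automatic transcripts, SNR estimate, English-detection score), here is a hypothetical filtering sketch; the column names snr_estimate and english_score and the thresholds are assumptions, not the corpus's actual schema.

```python
# Hypothetical sketch: keep pseudo-labelled samples whose metadata passes
# quality thresholds. Field names and thresholds are illustrative only.
import csv

def select_samples(metadata_csv, min_snr=5.0, min_english=0.5):
    """Keep rows with a decent SNR estimate and English-detection score."""
    kept = []
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            if (float(row["snr_estimate"]) >= min_snr
                    and float(row["english_score"]) >= min_english):
                kept.append(row)
    return kept
```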

    Using eigenvoices and nearest neighbours in HMM-based cross-lingual speaker adaptation with limited data (Sınırlı veriyle HMM tabanlı çapraz-dil konuşmacı uyarlamasında özses ve en yakın komşu kullanımı)

    Thesis (M.A.)--Özyeğin University, Graduate School of Sciences and Engineering, Department of Computer Science, August 2017. Thesis abstract: Cross-lingual speaker adaptation for speech synthesis has many applications, such as use in speech-to-speech translation systems. Here, we focus on cross-lingual adaptation for statistical speech synthesis systems using limited adaptation data, and we propose new methods for HMM-based and DNN-based speech synthesis. For HMM-based synthesis, we propose two eigenvoice adaptation approaches exploiting a bilingual Turkish-English speech database that we collected. In one approach, eigenvoice weights extracted using Turkish adaptation data and Turkish voice models are transformed into eigenvoice weights for the English voice models using linear regression. Weighting the samples by the distance of the reference speakers to the target speaker during linear regression was found to improve performance, and importance-weighting the elements of the eigenvectors during regression improved it further. The second approach proposed here is speaker-specific state mapping, which performed significantly better than the baseline state-mapping algorithm in both objective and subjective tests. Performance of the proposed state-mapping algorithm was further improved when it was used with the intra-lingual eigenvoice approach instead of the linear-regression-based algorithms used in the baseline system. For DNN-based synthesis, we propose a new unsupervised cross-lingual adaptation method: using a sequence of acoustic features from the target speaker, we estimate continuous linguistic features for the unlabeled data. In both objective and subjective experiments, the adapted model outperformed the gender-dependent average voice models in terms of quality and similarity.
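    The unsupervised DNN adaptation idea above can be sketched schematically: an inverse network pseudo-labels the target speaker's unlabeled speech with continuous linguistic features, and the average-voice synthesis network is then fine-tuned on those pairs. All module names, layer sizes and the training loop below are illustrative assumptions, not the thesis's code.

```python
# Schematic sketch of unsupervised DNN adaptation via pseudo-labelled
# linguistic features. Architectures and hyperparameters are illustrative.
import torch
import torch.nn as nn

acoustic_dim, linguistic_dim = 60, 300

# Inverse model: acoustic frames -> continuous linguistic features
# (assumed already trained on the reference speakers' paired data).
inverse_net = nn.Sequential(
    nn.Linear(acoustic_dim, 512), nn.ReLU(),
    nn.Linear(512, linguistic_dim),
)

# Average-voice synthesis model: linguistic features -> acoustic frames.
synthesis_net = nn.Sequential(
    nn.Linear(linguistic_dim, 512), nn.ReLU(),
    nn.Linear(512, acoustic_dim),
)

def adapt(acoustic_frames, epochs=10, lr=1e-4):
    """Fine-tune the synthesis net on pseudo-labelled target-speaker data."""
    # 1) Pseudo-label the unlabeled target speech with linguistic features.
    with torch.no_grad():
        pseudo_linguistic = inverse_net(acoustic_frames)
    # 2) Fine-tune the average voice toward the target speaker.
    opt = torch.optim.Adam(synthesis_net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(synthesis_net(pseudo_linguistic),
                                      acoustic_frames)
        loss.backward()
        opt.step()
```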

    Cross-lingual speaker adaptation for statistical speech synthesis using limited data

    Cross-lingual speaker adaptation with limited adaptation data has many applications, such as use in speech-to-speech translation systems. Here, we focus on cross-lingual adaptation for statistical speech synthesis (SSS) systems using limited adaptation data. To that end, we propose two techniques exploiting a bilingual Turkish-English speech database that we collected. In the first approach, speaker-specific state mapping is proposed for cross-lingual adaptation; it performed significantly better than the baseline state-mapping algorithm in adapting the excitation parameter in both objective and subjective tests. In the second approach, eigenvoice adaptation is done in the input language and is then used to estimate the eigenvoice weights in the output language using weighted linear regression. The second approach performed significantly better than the baseline system in adapting the spectral envelope parameters in both objective and subjective tests.
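    On the state-mapping side, a common formulation maps each output-language HMM state to its closest input-language state under a KL-divergence criterion; the sketch below assumes diagonal-covariance Gaussian state distributions, and the speaker-specific variant described above would run the same search on speaker-adapted rather than average-voice models. Shapes are illustrative.

```python
# Minimal sketch of KL-divergence-based state mapping between two HMM
# state inventories with diagonal-covariance Gaussian state distributions.
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL(N1 || N2) for diagonal-covariance Gaussians."""
    return 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def map_states(means_out, vars_out, means_in, vars_in):
    """For each output-language state, pick the closest input-language state."""
    mapping = []
    for mu_o, var_o in zip(means_out, vars_out):
        kls = [kl_diag_gauss(mu_o, var_o, mu_i, var_i)
               for mu_i, var_i in zip(means_in, vars_in)]
        mapping.append(int(np.argmin(kls)))
    return mapping
```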

    Eigenvoice speaker adaptation with minimal data for statistical speech synthesis systems using a MAP approach and nearest-neighbors

    Statistical speech synthesis (SSS) systems have the ability to adapt to a target speaker with a couple of minutes of adaptation data. Developing adaptation algorithms that further reduce the amount of adaptation data to a few seconds can have a substantial effect on the deployment of the technology in real-life applications such as consumer electronics devices. The traditional way to achieve such rapid adaptation is the eigenvoice technique, which works well in speech recognition but is known to generate perceptual artifacts in statistical speech synthesis. Here, we propose three methods to alleviate the quality problems of the baseline eigenvoice adaptation algorithm while allowing speaker adaptation with minimal data. Our first method is based on using a Bayesian eigenvoice approach to constrain the adaptation algorithm to move in realistic directions in the speaker space and thereby reduce artifacts. Our second method is based on finding pre-trained reference speakers that are close to the target speaker and utilizing only those reference speaker models in a second eigenvoice adaptation iteration. Both techniques performed significantly better than the baseline eigenvoice method in the objective tests, and both improved speech quality in subjective tests compared to the baseline eigenvoice method. In the third method, tandem use of the proposed eigenvoice method with a state-of-the-art linear-regression-based adaptation technique was found to improve adaptation of excitation features. Funding: TÜBİTAK; European Commission.
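    The nearest-neighbour idea can be sketched as follows: take a first-pass estimate of the target speaker in the eigenvoice space, keep only the closest reference speakers, and rebuild the eigenvoice basis from their supervectors for a second adaptation pass. The SVD-based eigen-analysis and the choice of k below are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: rebuild the eigenvoice basis from the k reference
# speakers nearest to a first-pass estimate of the target speaker.
import numpy as np

def neighbour_eigenspace(ref_supervectors, ref_weights, target_weights, k=10):
    """ref_supervectors: (n_ref, D) model supervectors; ref_weights and
    target_weights live in the first-pass eigenvoice space."""
    d = np.linalg.norm(ref_weights - target_weights, axis=1)
    nearest = np.argsort(d)[:k]                      # k closest references
    S = ref_supervectors[nearest]
    mean = S.mean(axis=0)
    # Eigenvoices of the neighbourhood via SVD of centred supervectors.
    _, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
    return mean, Vt                                  # new basis for pass two
```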

    BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications

    Automatic speech recognition (ASR) allows transcribing the communications between air traffic controllers (ATCOs) and pilots. The transcriptions are later used to extract ATC named entities, e.g., aircraft callsigns. A common challenge is speech activity detection (SAD) and speaker diarization (SD): in the failure condition, two or more speaker segments remain merged in the same recording, jeopardizing overall performance. We propose a system that combines SAD and a BERT model to perform speaker change detection and speaker role detection (SRD) by chunking ASR transcripts, i.e., SD with a defined number of speakers together with SRD. The proposed model is evaluated on real-life public ATC databases. Our BERT SD baseline reaches up to 10% and 20% token-based Jaccard error rate (JER) on public and private ATC databases, respectively. We also achieved relative improvements of 32% and 7.7% in JER and SD error rate.
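    A hedged sketch of the core modeling idea is shown below: joint speaker-role and speaker-change detection treated as BERT token classification over an ASR transcript. The checkpoint name and the BIO-style label set are assumptions for illustration, and the classification head would still need fine-tuning on labeled ATC transcripts before the predictions mean anything.

```python
# Sketch: speaker role + change detection as token classification.
# Checkpoint and label set are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["B-ATCO", "I-ATCO", "B-PILOT", "I-PILOT"]  # B-* marks a change
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

def tag_transcript(text):
    """Label each token with a speaker role; B-* marks a speaker change."""
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**enc).logits.argmax(-1)[0]
    tokens = tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    return list(zip(tokens, (labels[i] for i in pred.tolist())))
```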

    Apron Controller Support by Integration of Automatic Speech Recognition with an Advanced Surface Movement Guidance and Control System

    Digital assistants in air traffic control today have access to a large number of sensors that allow monitoring of traffic in the air and on the ground. Voice communication between air traffic controllers and pilots, however, is not used by these assistants: whenever information from voice communication has to be digitized, controllers are burdened with entering it manually. Research shows that up to one third of controllers' working time is spent on these manual inputs. Assistant Based Speech Recognition (ABSR) has already been shown to reduce the amount of manual input from controllers. This paper presents how a modern digital assistant, a so-called A-SMGCS, can utilize the outputs of ABSR. The combined application is installed in the complex apron simulation training environment of Frankfurt airport. This allows, on the one hand, the integration of recognized controller commands into the A-SMGCS planning process; on the other hand, ABSR performance is improved through the use of A-SMGCS information. The implemented ABSR system alone reaches a word error rate of 3.1% for the text recognition process, which results in a callsign recognition rate of 97.4% and a command recognition rate of 91.8%. The integration of ABSR into the A-SMGCS reduces controller workload, which increases overall performance and safety.
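    As a small worked example, a callsign recognition rate such as the 97.4% above can be scored by exact match between gold and recognized callsigns per utterance; the whitespace/case normalisation below is an illustrative assumption, not the paper's evaluation code.

```python
# Sketch: score a callsign recognition rate by per-utterance exact match.
def callsign_recognition_rate(gold, recognized):
    """gold, recognized: lists of callsign strings, aligned by utterance."""
    norm = lambda c: "".join(c.upper().split())   # "dlh 2ab" -> "DLH2AB"
    hits = sum(norm(g) == norm(r) for g, r in zip(gold, recognized))
    return hits / len(gold)

# Example: callsign_recognition_rate(["DLH2AB"], ["dlh 2ab"]) -> 1.0
```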